feat(gnovm): add benchmark system #1624
Conversation
Codecov Report: All modified and coverable lines are covered by tests ✅

Additional details and impacted files:

```
@@           Coverage Diff            @@
##           master    #1624   +/-   ##
=======================================
  Coverage   56.42%   56.43%
=======================================
  Files         439      439
  Lines       66558    66558
=======================================
+ Hits        37558    37562       +4
+ Misses      26087    26081       -6
- Partials     2913     2915       +2
```

Flags with carried forward coverage won't be shown. ☔ View full report in Codecov by Sentry.
@thehowl They won't get picked up by the benchmark charts by default; you have to create a rule first. All new benchmarks will be added afterward. There is a check on the PR template pointing to the instructions on how to do it: https://github.com/gnolang/gno/blob/master/.github/pull_request_template.md?plain=1 In any case, the GitHub Actions runner needed for the tests was never deployed, AFAIK.
Thank you for the benchmarks 🙏
I was attempting some changes with performance ramifications and wanted to measure their impact. They eventually turned out to be bad performance-wise, but the code I made for measuring performance is probably still useful.

This PR sets up a basic performance measurement system for the GnoVM. It leverages Go's existing benchmarking system together with templates, and it allows us to measure impact on performance using standard Go tools like [benchstat](https://pkg.go.dev/golang.org/x/perf/cmd/benchstat).

```console
$ go test -bench . -run NONE .
goos: linux
goarch: amd64
pkg: github.com/gnolang/gno/gnovm/pkg/gnolang
cpu: AMD Ryzen 7 PRO 4750U with Radeon Graphics
BenchmarkPreprocess-16                            7737    152430 ns/op
BenchmarkBenchdata/fib.gno_param:4-16            47374     24664 ns/op
BenchmarkBenchdata/fib.gno_param:8-16            10000    213632 ns/op
BenchmarkBenchdata/fib.gno_param:16-16             100  10590462 ns/op
BenchmarkBenchdata/loop.gno-16                 2556032     509.3 ns/op
BenchmarkBenchdata/matrix_mult.gno_param:3-16     1976    590535 ns/op
BenchmarkBenchdata/matrix_mult.gno_param:4-16      740   1703166 ns/op
BenchmarkBenchdata/matrix_mult.gno_param:5-16      180   6797221 ns/op
BenchmarkBenchdata/matrix_mult.gno_param:6-16       36  34389320 ns/op
BenchmarkCreateNewMachine-16                     80518     13819 ns/op
PASS
ok  	github.com/gnolang/gno/gnovm/pkg/gnolang	16.790s
```

A few benchmarks are provided to show how it works and to benchmark some initial functions. `loop.gno` brings over the existing `LoopyMain` benchmark and makes its output more useful by correlating it with `b.N`.

Making this PR to get a preliminary review and open up discussions:

- I removed `go_bench_data.go` because, as the name suggests, it tests out things in Go and isn't actually related to the GnoVM at all.
- Do we want to keep these benchmarks in `misc/` or somewhere else? Or has the information gathered from these tests already been used to act, and as such they can be removed?
- Was something similar already in the works?

@deelawn @petar-dambovaliev

- Is it OK to place the benchmarking files in `benchdata`, or would we prefer another location?
- Would these benchmarks be automatically picked up for gnolang/benchmarks?

cc/ @ajnavarro @albttx
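On the `loop.gno`/`b.N` point: correlating the loop bound with `b.N` is what makes the reported ns/op meaningful, because the benchmark harness grows `b.N` until timings stabilize. A small runnable sketch of that idea, using `testing.Benchmark` and a trivial stand-in for the interpreted loop body (the real benchmark drives a GnoVM machine, not a Go function):

```go
package main

import (
	"fmt"
	"testing"
)

// sink keeps the loop body from being optimized away.
var sink int

// loopBody stands in for one iteration of the interpreted loop.gno loop.
func loopBody(i int) { sink += i }

func main() {
	// testing.Benchmark runs the function with increasing b.N until the
	// measurement is stable; because the loop runs exactly b.N times,
	// ns/op is the cost of one iteration, not of a fixed-size run.
	res := testing.Benchmark(func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			loopBody(i)
		}
	})
	fmt.Printf("ran %d iterations, %d ns/op\n", res.N, res.NsPerOp())
}
```

A fixed-iteration benchmark (the old `LoopyMain` style) only reports total wall time, which benchstat cannot normalize across runs; tying the bound to `b.N` fixes that.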